I went googling really quickly and found How to start a Docker Image within Proxmox? One of the posts in this thread says,

Is that right? I wasn't aware that Docker was not fully open source. I wonder how much Docker stuff runs on the OCI.

@Not_Oles said:
Also, might it be better for you to run your Docker container directly on the i9 server node rather than inside LXC?
Giving someone access to Docker Engine is equivalent to giving root, because it's trivial to become root on the host machine where Docker Engine is running.
This is why Docker Engine must run inside unprivileged LXC or KVM.
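For example (an illustration, not something from this thread): anyone who can reach the Docker socket can mount the host's root filesystem into a throwaway container and chroot into it, which amounts to a root shell on the host:

docker run --rm -it -v /:/host alpine chroot /host /bin/sh   # root shell on the host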
I'm 100% okay with the /112 subnet deployment. Nevertheless,
assuming the goal is to have one WAN IPv6 address for your LXC container and another WAN IPv6 address for the Docker container that you make inside your LXC container,
Docker needs one container per application.
For example, nginx and PHP would be separate containers, each in its own network namespace.
could this be set up with two IPv6/128 addresses both talking to the node and with each other in the link layer?
It's impossible with Docker networking.
The IPAM component expects a routed subnet and cannot allocate from disjoint addresses.
I got my cool lxc instance from @Not_Oles, it works very well. Thx for the giveaway, will play with it later.

@yoursunny may I ask why you don't use the experimental IPv6 settings for docker? What's the benefit of using different IPv6 addresses for each container instead of the default NAT setup docker uses?
@buddermilch said: @yoursunny may I ask why you don't use the experimental IPv6 settings for docker? What's the benefit of using different IPv6 addresses for each container instead of the default NAT setup docker uses?
It's explained at the top of IPv6 Neighbor Discovery Responder for KVM VPS: Network Address Translation (NAT) changes the source port number of outgoing UDP datagrams, even if there's a port forwarding rule for inbound traffic; consequently, a UDP flow with the same source and destination ports is recognized as two separate flows.
Indeed, maybe it might be better if all the containers got routed /112?
Yes, give each LXC container the largest allocation you could afford.
If the physical server has /64, each LXC container should get /80.
Setting up routed containers with Proxmox might take me a while because I have not tried it before.
The manual setup is to add these commands to a script:
# static neighbor entry for the container's address, so the host resolves it on brlink without ND
ip neigh add 2001:db8:e3af:d2af:1100::1 lladdr f2:2a:57:ab:1b:a6 dev brlink
# route the container's whole /80 toward that address over the bridge
ip route add 2001:db8:e3af:d2af:1100::/80 dev brlink via 2001:db8:e3af:d2af:1100::1 onlink
If the physical server doesn't have routed IPv6, you also need ndpresponder to turn the on-link subnet into a routed subnet.
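Inside the container, the matching guest-side configuration might look roughly like this (a sketch, not taken from the thread; the gateway shown is hypothetical and should be whatever address the host uses on brlink):

ip -6 addr add 2001:db8:e3af:d2af:1100::1/80 dev eth0
ip -6 route add default via 2001:db8:e3af:d2af::1 dev eth0 onlink   # hypothetical host address on brlink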
@yoursunny Thanks for helpful comments and answers to questions! Thanks especially for comments and answers which contain terms specific enough to permit google searching! And thanks double especially for your linked ndpresponder article!
If the physical server has /64, each LXC container should get /80.
It sounds like you might want your i9.metalvps.com LXC container to receive a /80 instead of a /112. Or should your container receive a /124 because the ndpresponder README.md says, "It's recommended to keep the subnets as small as possible"? Of course I am just kidding! You can have whatever size you want.
I will try and we will see whether I can set up a container and ndpresponder for you, probably tomorrow. It will be fun! Thank you again for your helpful instructions and for giving me the opportunity to try.
When I first posted this thread, the OP wasn't clear about what I meant by "helpfulness." Also, I could have been more clear about "privacy."
When I asked for links showing helpfulness I was imagining more of a tech orientation toward "helpfulness." I wanted the free server accounts to go to people who helped others with technical issues such as configuration and programming. People who ask good technical questions (specific, supported by data, error messages, etc.) would be equally welcome on the server.
It's important that LESbians assist LES by participating in discussions of seller practices and LES policy. These discussions are important to all of us, so hopefully I or someone else will do a giveaway based on helpfulness in important, but non-tech ways.
A second area in which the OP was unclear is privacy. Some people do not seem to mind everyone knowing who they are. Mason is a good example. Other people prefer limiting disclosure and using PMs rather than posting in the public thread. I have no problem with people posting anonymously in forums, but the giveaway server is at Hetzner, and I imagine Hetzner might prefer me knowing who is using the server I am renting from them. Without any criticism of people who value privacy, perhaps this giveaway server might be more for people willing to post their requests, questions, and support tickets here in this public thread, and for people okay with other LESbians knowing who and maybe where they are. Of course, I want to be flexible, because I imagine that some people might need more privacy than others.
I might edit the OP to clarify the tech orientation and the privacy issues. What do you guys think?
I guess people who want to use the server have no problem revealing their identities to you in private. After all, the reveal is between them and you. This is also a typical type of reveal between VPS providers and their customers.
Asking people to post their personal information here, publicly, might be a privacy concern to many.
Unless I totally misunderstood you... That is. The all seeing eye sees everything...

Thanks for the correction, @terrorgen! I'm okay with people sharing information privately with me. But, to the extent possible, I personally prefer open sharing. I will try to think of clearer and simpler wording. Thanks again!

At the present time I believe I have responded to everyone who has asked for an account. If I have missed anyone, please do remind me. Thank you!
Installing IPv6 Neighbor Discovery Responder On Proxmox-VE 7.3-4 -- Part I
@Not_Oles said: I will try and we will see whether I can set up a container and ndpresponder for you, probably tomorrow. It will be fun! Thank you again for your helpful instructions and for giving me the opportunity to try.
Previously in this thread there has been discussion of how to get Docker working inside an LXC container. As discussed previously, Docker's networking setup doesn't cover situations where Docker does not receive responses to IPv6 Neighbor Solicitations (for example, as here, when the node is itself provisioned with on-link IPv6 instead of routed IPv6).
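(For reference, once a routed subnet is available inside the container, the Docker side is usually just a daemon.json entry; the prefix below is illustrative, carved from the /80 discussed above, and is not a setting taken from this thread:)

cat >/etc/docker/daemon.json <<'EOF'
{ "ipv6": true, "fixed-cidr-v6": "2001:db8:e3af:d2af:1100:1::/112" }
EOF
systemctl restart docker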
@yoursunny has written a wonderful program called ndpresponder. Please see also @yoursunny's explanatory blog post, IPv6 Neighbor Discovery Responder for KVM VPS. According to the ndpresponder GitHub README.md, "ndpresponder is a Go program that listens for ICMPv6 neighbor solicitations on a network interface and responds with neighbor advertisements, as described in RFC 4861 - IPv6 Neighbor Discovery Protocol."
The README.md also says, "This program is written in Go. It requires both Go compiler and C compiler." Checking our server's previously installed Hetzner installimage Proxmox-VE 7.3-4 suggests that neither go nor gcc is present.
root@Proxmox-VE ~ # which go
root@Proxmox-VE ~ # which gcc
root@Proxmox-VE ~ #
The Go Download and install instructions cover installing Go from compiled binaries. The binary install procedure begins with a Download.
Go also provides Installing Go from source instructions. The installing-from-source instructions refer to "two official go compiler toolchains," the "gc Go compiler," written in Go, and "gccgo, a more traditional compiler using the GCC back end. . . ." gcc, and not gccgo, might be the "C compiler" referred to in the ndpresponder README.md.
Proxmox-VE 7.3-4 uses an Ubuntu kernel and a Debian userland. Our install is Debian's 11.6 userland.
The current version of Go available within the Debian package system for Debian 11 is 1.15. However, the Go version available via Go Downloads is 1.19.5.
Some userland programs in Proxmox are different than in Debian. Proxmox maintains its own source repository. You can see, for example, that "gcc" does not appear in the Proxmox repository and that "qemu" appears several times. Previously, I noticed that Proxmox seemed to have the apt package management system well set up to issue warnings when there might be a conflict between a Debian install and the Proxmox version.
Many people might consider it a bad practice to install considerable additional software on a Proxmox node. They might be right, especially if the node-installed software somehow conflicts with Proxmox. Well, MetalVPS is, as stated in the Warnings section of the OP, ephemeral.
Under the circumstances, the procedure might be:
install gcc and friends, including git
install the Go binary distribution
clone the Go sources
take a peek at the Go sources
compile and install Go from source (requires the Go binary distribution or other options)
take a peek at RFC 4861
clone the ndpresponder sources
read the ndpresponder sources
compile, install, and configure ndpresponder
make @yoursunny's Ubuntu container
see if everything works
More soon! 🔜 Any comments and suggestions will be greatly appreciated! Does anyone else want an account? Have a wonderful day!
You don't need to install compilers on the host machine.
Instead, make an LXC container, install build-essential (provides C compiler) and Go 1.19, and compile ndpresponder binary.
Then, copy the binary to the host machine, and setup systemd service.
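Roughly, those last two steps might look something like this (a sketch, not something run here; the unit file is illustrative, and the ndpresponder arguments belong wherever the README says they go):

# on the Proxmox host, after building inside container 107
pct pull 107 /root/go/bin/ndpresponder /usr/local/bin/ndpresponder   # or copy the binary out any other way
chmod +x /usr/local/bin/ndpresponder
cat >/etc/systemd/system/ndpresponder.service <<'EOF'
[Unit]
Description=IPv6 Neighbor Discovery responder
After=network-online.target
Wants=network-online.target

[Service]
# interface and subnet arguments per the ndpresponder README
ExecStart=/usr/local/bin/ndpresponder
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now ndpresponder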
Our local genius boy @Not_Oles clicked a button that shut down the server. One of those "Oops, what did I just do?" moments.
Neither the Hetzner Robot control panel Ctrl+Alt+Del nor automatic hardware reset seemed to work to restart the server. The server also did not restart after Rescue System activation. I sent a ticket to Hetzner asking them to try a restart. They restarted the server in the rescue system.
All the containers except 101 respond to ping. 101 seems to have some traffic. So I think the situation is resolved. It is getting late here, so I am going to sleep now. Remaining issues, if any, can be addressed in the morning. Sorry! Hope I don't repeat this mistake!
@yoursunny said:
You don't need to install compilers on the host machine.
Instead, make an LXC container, install build-essential (provides C compiler) and Go 1.19, and compile ndpresponder binary.
Then, copy the binary to the host machine, and setup systemd service.
Hi @yoursunny! Okay, new container made, gcc installed.
root@107-compiler:~# gcc --version
gcc (Debian 10.2.1-6) 10.2.1 20210110
Copyright (C) 2020 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
root@107-compiler:~#
Next up is installing Go.

I paste these commands to install Go on Debian / Ubuntu. After initial installation, paste the top two lines to upgrade Go to the latest version.
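(As run in the transcript further down, the pasted block was approximately:)

rm -rf /usr/local/go
curl -sfLS https://go.dev/dl/$(curl -sfLS https://go.dev/VERSION?m=text).linux-amd64.tar.gz | tar -C /usr/local -xz
if ! grep -q go/bin ~/.bashrc; then
  echo 'export PATH=${HOME}/go/bin${PATH:+:}${PATH}' >>~/.bashrc
fi
update-alternatives --remove-all go || true
update-alternatives --install /usr/bin/go go /usr/local/go/bin/go 1
update-alternatives --remove-all gofmt || true
update-alternatives --install /usr/bin/gofmt gofmt /usr/local/go/bin/gofmt 1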
Thanks very much for sharing your commands to install Go! I had a great time reading your commands and comparing your commands with the officially suggested Go Download and Install commands. I read a few man pages, including the update-alternatives man page. I also did a few google searches. One interesting topic I read about was the differences between the use of "|| :" and "|| true".
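(For what it's worth, the two are interchangeable here: both : and true are shell builtins that do nothing and exit 0, so either one swallows a non-zero exit status:)

update-alternatives --remove-all go || :      # ':' is the shell's null builtin, exit status 0
update-alternatives --remove-all go || true   # same effect, arguably easier to read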
Your curl command uses "$()" for Command Substitution. Because "curl" begins with the letter "c," and because -- just for fun -- your curl command might be considered ever so slightly "obfuscated," your command substitution with curl made me imagine changing "International Obfuscated C Code Contest" to "International Obfuscated Curl Code Contest."
The most recent winning entries on the IOCCC website seem to be from the 27th Contest in 2020. The most recent news entry is dated last month. I was glad to see "We plan to hold IOCCC28 in 2023." in the news entry dated 2021-12-27.
One thing I wondered about was whether your Go install commands presented above are part of a more comprehensive provisioning system that you use. Searching Google for "install vps site:yoursunny.com" led me to Install Ubuntu from ISO on IPv6-only KVM Server in SolusIO.
I haven't tried running your commands yet. Let me go (pun intentional) back up the container, then run your commands, and then post about what happened.
Thanks again for all the fun I had reading your Go install commands! 🌟
Turns out root missed Endurance when she sailed. Had root been aboard Endurance, he could have changed the hostname of the "107-compiler" container to "antarctica" and added a user with sudo called "shackleton." The container's build user, shackleton, would be much better prepared for IPv9.
Below is a slightly sanitized version of what happened when I (almost) followed @yoursunny's suggested commands.
root@Proxmox-VE ~ # date -u
Sat 14 Jan 2023 09:13:01 PM UTC
root@Proxmox-VE ~ # lxc-attach -n 107
root@107-compiler:~# # Install Go with @yoursunny's commands
root@107-compiler:~# # https://lowendspirit.com/discussion/comment/123193/#Comment_123193
root@107-compiler:~# rm -rf /usr/local/go
root@107-compiler:~# curl -sfLS https://go.dev/dl/$(curl -sfLS https://go.dev/VERSION?m=text).linux-amd64.tar.gz | tar -C /usr/local -xz
root@107-compiler:~# ls /usr/local
bin etc games go include lib man sbin share src
root@107-compiler:~# if ! grep -q go/bin ~/.bashrc; then
echo 'export PATH=${HOME}/go/bin${PATH:+:}${PATH}' >>~/.bashrc
fi
root@107-compiler:~# tail -n 1 .bashrc
export PATH=${HOME}/go/bin${PATH:+:}${PATH}
root@107-compiler:~# update-alternatives --remove-all go || true
update-alternatives: error: no alternatives for go
root@107-compiler:~# update-alternatives --install /usr/bin/go go /usr/local/go/bin/go 1
update-alternatives: using /usr/local/go/bin/go to provide /usr/bin/go (go) in auto mode
root@107-compiler:~# update-alternatives --remove-all gofmt || true
update-alternatives: error: no alternatives for gofmt
root@107-compiler:~# update-alternatives --install /usr/bin/gofmt gofmt /usr/local/go/bin/gofmt 1
update-alternatives: using /usr/local/go/bin/gofmt to provide /usr/bin/gofmt (gofmt) in auto mode
root@107-compiler:~# which go
/usr/bin/go
root@107-compiler:~# go version
go version go1.19.5 linux/amd64
root@107-compiler:~# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
root@107-compiler:~# exit
exit
root@Proxmox-VE ~ # lxc-attach -n 107
root@107-compiler:~# echo $PATH
/root/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
root@107-compiler:~# date -u
Sat Jan 14 21:43:44 UTC 2023
root@107-compiler:~# exit

Next up might be running the ndpresponder install command: go install github.com/yoursunny/ndpresponder@latest

@yoursunny What's next? Want to run any testing before moving the ndpresponder binary to the node?
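(With GOPATH left at its default, go install should drop the compiled binary at ~/go/bin/ndpresponder inside the container, and that directory is already on PATH thanks to the .bashrc line above, so a quick sanity check might be:)

which ndpresponder
ls -l ~/go/bin/ndpresponder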
We have a few more free LXC containers available if anyone is interested. Something like
Node yabs at https://lowendspirit.com/discussion/comment/121790/#Comment_121790
Please consider the flexible requirements to (1) share a link to a post here on LES where you helped someone else answer a tech question and (2) either be willing to share your identity or tell us your good reason not to share it.
Haha, I really did want to look at the ndpresponder source code.
Can it really be true that Github doesn't have IPv6? IPv6 support for cloning Git repositories #10539
Whoa! Amazing! 😵 Sourcehut says they plan to include IPv6 as part of their EU rollout.
I took a quick look at GitLab, but it wasn't immediately clear whether GitLab has full IPv6 functionality. GitLab does seem to have an IPv6 response to host:
root@107-compiler:~# host gitlab.com
gitlab.com has address 172.65.251.78
gitlab.com has IPv6 address 2606:4700:90:0:f22e:fbec:5bed:a9b9
[mail servers]
root@107-compiler:~#
Today is January 18, 2023. In a few days, around January 25, I expect to receive and pay the first billing for this server. This server is set to cancel on February 7, 2023. So February 7 is the last day that this server will be available unless the cancellation is revoked.
Hetzner has told me that they are okay with my keeping servers only for one month. Hetzner will be paid the full amount for the full month, so what is happening here is not the same as cancelling during their trial period where they receive nothing.
I am delighted that people are using the server! I would be glad to add more free accounts if people are interested. If you are interested, please read the OP and skim the thread before asking.
I think it would be very wonderful if guys using the server shared a little about what they are doing and about their setups inside their containers.
Prior to February 7, I hope to offer the guys presently using this server an opportunity to join a new server. Already I have added a second Hetzner server, and I might add a third. Additionally I am talking with another provider. Alternatively, I might simply revoke this server's pending cancellation.
This server has both 128 GB RAM and also a 16 TB HDD in addition to the 2 x 1 TB NVMe drives. If anybody might want to transfer this particular setup, please let me know.
@Not_Oles said: Can it really be true that Github doesn't have IPv6?
Unfortunately.
https://nat64.xyz/ seems really useful for that case (just use 2001:67c:2960::64 as your nameserver and it will use the NAT64 service for IPv4-only services).
I didn't really want to change the system resolv.conf, so I wrote a simple wrapper script (using bubblewrap) to only use that resolver for certain processes:
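(Something along these lines, presumably; this is a guess at what such a wrapper looks like, since the actual script wasn't quoted here, and it assumes the DNS64 resolver mentioned above:)

#!/bin/sh
# nat64-run: run a command with /etc/resolv.conf pointing at the public DNS64 resolver
conf=$(mktemp)
echo 'nameserver 2001:67c:2960::64' > "$conf"
exec bwrap --ro-bind / / --dev /dev --proc /proc \
  --ro-bind "$conf" /etc/resolv.conf \
  "$@"

Used, for example, as: nat64-run git clone https://github.com/yoursunny/ndpresponder.git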
Good catch! I knew about nat64.xyz from a while back when there was some discussion here. I remember reading @yoursunny's blog post Enable IPv4 Access in EUserv IPv6-only VS2-free. I was so shocked about Github not having IPv6 that I didn't even think about nat64.xyz. 😵
Very nice! I haven't seen bubblewrap before. Here are some links for the curious:
https://github.com/containers/bubblewrap
https://wiki.archlinux.org/title/Bubblewrap
https://manpages.debian.org/testing/bubblewrap/bwrap.1.en.html

@Not_Oles said: I think it would be very wonderful if guys using the server shared a little about what they are doing and about their setups inside their containers.
Not sure I have done too much useful stuff yet... Mainly just figuring out what issues can arise from being IPv6 only. And I guess I am usually just too careful to build stuff as lean as possible so I can easily host it on my lowend VPSes.
Thanks to @cmeerw and @yoursunny and many others for helpful comments. One of the things I love the best about LES is that people are so generous with helpful comments.
@cmeerw said: Not sure I have done too much useful stuff yet... Mainly just figuring out what issues can arise from being IPv6 only. And I guess I am usually just too careful to build stuff as lean as possible so I can easily host it on my lowend VPSes.
@cmeerw I'm sure that you will build something useful and fun before too long!
By the way, I made your container before I figured out how to add the hard disk. Want me to add a few TB of HDD space? I think I have to reboot your container to add the drive. Or we can just leave everything peaceful if you don't need the HDD.
I imagined the guys here being excited about the HDD. All the time people seem to be looking for backup space. But there has been hardly anything said about the 16 TB of HDD on this free server.
The 16 TB HDD plus the increase from 64 to 128 GB RAM seems to raise the cost of the server by about $30/month.
It seems the server might have booted about four hours ago. What happened? When? Why?
root@Proxmox-VE ~ # date
Fri 20 Jan 2023 05:39:09 AM UTC
root@Proxmox-VE ~ #
From /var/log/messages:
Jan 19 16:13:03 Proxmox-VE kernel: [552651.050197] overlayfs: fs on '/root/Download/docker-data-root/overlay2/l/ZAAOPBF6QOV76SOU4CURM2ZQUD' does not support file handles, falling back to xino=off.
Jan 19 16:13:03 Proxmox-VE kernel: [552651.075135] overlayfs: fs on '/root/Download/docker-data-root/overlay2/l/OO54Z7VIVSLWLVUU2TJPBYRHHU' does not support file handles, falling back to xino=off.
Jan 19 16:13:03 Proxmox-VE kernel: [552651.101942] overlayfs: fs on '/root/Download/docker-data-root/overlay2/l/OO54Z7VIVSLWLVUU2TJPBYRHHU' does not support file handles, falling back to xino=off.
Jan 19 16:13:04 Proxmox-VE kernel: [552651.978316] overlayfs: fs on '/root/Download/docker-data-root/overlay2/l/ZAAOPBF6QOV76SOU4CURM2ZQUD' does not support file handles, falling back to xino=off.
Jan 19 16:13:04 Proxmox-VE kernel: [552652.177562] overlayfs: fs on '/root/Download/docker-data-root/overlay2/l/ZAAOPBF6QOV76SOU4CURM2ZQUD' does not support file handles, falling back to xino=off.
Jan 19 16:13:04 Proxmox-VE kernel: [552652.202955] overlayfs: fs on '/root/Download/docker-data-root/overlay2/l/KG6XV5ST5DRG7QBYN5VCVUJLD2' does not support file handles, falling back to xino=off.
Jan 19 16:13:05 Proxmox-VE kernel: [552652.222600] overlayfs: fs on '/root/Download/docker-data-root/overlay2/l/KG6XV5ST5DRG7QBYN5VCVUJLD2' does not support file handles, falling back to xino=off.
Jan 19 16:13:05 Proxmox-VE kernel: [552653.091045] overlayfs: fs on '/root/Download/docker-data-root/overlay2/l/ZAAOPBF6QOV76SOU4CURM2ZQUD' does not support file handles, falling back to xino=off.
Jan 20 01:37:25 Proxmox-VE kernel: [ 0.000000] microcode: microcode updated early to revision 0xf0, date = 2021-11-16
Jan 20 01:37:25 Proxmox-VE kernel: [ 0.000000] Linux version 5.15.83-1-pve (build@proxmox) (gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2) #1 SMP PVE 5.15.83-1 (2022-12-15T00:00Z) ()

Ideas, hints, and suggestions are very welcome! The server seems to be working okay again.
@Not_Oles said: I imagined the guys here being excited about the HDD. All the time people seem to be looking for backup space. But there has been hardly anything said about the 16 TB of HDD on this free server.
I only have about 50-55 GB of data I really need to back up (and 45 GB of that are just personal photos, the rest is the important stuff I spread around free storage plans, obviously encrypted).
Okay, thank you for kindly letting me know.
I asked Hetzner whether they might be willing to remove the HDD, and, if they did, how the price would change. We will see what Hetzner says.
Hetzner said they are unaware of any incident which might have caused the server to reboot.
Proxmox now has a kernel update, and suggests that we consider a reboot. Right now it's Fri 20 Jan 2023 08:30:34 PM UTC. Will try the reboot in a moment. . . .

Everything seems okay to me following the reboot. Please let us know if you see any issues. Thanks!