MetalVPS Labs Server Tech Thread

Not_Oles (Hosting Provider, Content Writer)
edited November 2021 in Help

Please post anything and everything directly or indirectly related to MetalVPS Labs Server Tech! :)

MetalVPS Labs Server Tech topics currently include:

  • differences in randomness on bare metal and in VPSes

  • compiling and configuration of Qemu -- @Not_Oles

  • server network configuration for WAN VPS access -- @Not_Oles

  • adapting ikiwiki for research notes and neighborly chat

  • self-compiling a minimal Linux distribution directly from bleeding edge upstream source control -- @Not_Oles

Responses and questions from anybody in the LES community are welcome. You do not have to be a MetalVPS Labs Neighbor in order to participate here. It's okay to ask for help here. It's okay if you are a beginner. :)


MetalVPS Labs presently includes a bare metal server and a group of VPSes from various providers. The bare metal server runs a rolling release current GNU/Linux distribution. So far, the VPSes have been running stable release versions of a different distribution. The specific distributions are not mentioned here because of a desire to focus on the Linux kernel and upstream software, especially features common to all distributions.


Is this a "help thread" in the "Technical" category? Or a "tech thread" in the "Help" category? :)

Thanks to @Chievo for the idea to create this thread and for suggesting that "Yes for sure many people would find it extremely interesting ;)"


🔜 @Not_Oles might post about booting the Debian sid daily nocloud image on bleeding edge qemu. . . .

MetalVPS Labs Neighbors who wish to introduce the LES Community to their projects are encouraged to post introductions and progress reports.

Comments

  • @Not_Oles said:

    MetalVPS Labs Server Tech topics currently include:
    [...]
    differences in randomness on bare metal and in VPSes
    adapting ikiwiki for research notes and neighborly chat

    fun stuff indeed, appreciating the comfortable computing experience!

    Will be interested to learn more about the QEMU vm and network voodoo as well ...

    Thanked by (1)Not_Oles

    HS4LIFE (+ (* 3 4) (* 5 6))

  • Not_Oles (Hosting Provider, Content Writer)
    edited November 2021

    How to Launch Your Own KVM Virtual Machines On Almost Any Linux Distribution and on Almost Any Laptop or VPS

    Context

    Linux distributions seem very different from each other. The install experiences seem very different, the default interfaces seem very different, the package systems seem very different, the update experiences and update schedules seem very different, and system and network configuration utilities seem very different. Switching from one distribution to another can seem almost impossibly difficult because it seems like one has to "start over."

    Yet, when we look underneath, we find that all distributions have a great deal in common. All of the distributions are based on the same Linux kernel, despite distribution specific configurations and tweaks. Almost all of the distributions also have very similar basic sets of utilities and libraries.

    A lot of fun can be had by abstracting away the differences between distributions and leveraging their underlying similarities! As but one example, here below might be a distribution independent way to launch your own KVM virtual machines on almost any Linux distribution. As far as I am aware, most people should be able to run the following steps in almost any Linux on almost any laptop, desktop, or VPS.

    The example below uses qemu's default networking setup, which is called "slirp." Slirp uses the host's IP address, so this setup works even on systems which do not have multiple IP addresses. Although not covered here, slirp can handle port forwarding so that independent ssh and https access to the VM can be arranged. Also not covered here: in addition to the command line access described below, qemu provides convenient facilities for graphical access to VMs. When using command line access, it's helpful to launch the VM inside tmux or GNU screen. That way, it's easy to hop in and out of the VM, and even to log out of the host while the VM keeps running.
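    The port forwarding mentioned above is not covered in this post, but a hedged sketch might look like the following (the host port number is an assumption for illustration; the image name matches the download step later in this post):

    ```shell
    # Hypothetical slirp port forward: host port 2222 -> guest port 22 (ssh).
    qemu-system-x86_64 \
        -nographic \
        -m 1G \
        -netdev user,id=net0,hostfwd=tcp::2222-:22 \
        -device e1000,netdev=net0 \
        -hda debian-sid-nocloud-amd64-daily.qcow2
    ```

    After the guest boots, `ssh -p 2222 root@127.0.0.1` on the host should reach the guest's sshd.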

    Using the bleeding edge software in the example here should be fine as long as the system is not used for production. It's an excellent idea not to try this on a system running needed services! At a minimum, before starting, please make double extra sure that everything you need is backed up redundantly. Please also be double extra sure that you actually can restore from the backup.

    The steps here require development tools to be installed according to the distribution's package system.
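    On an apt-based system, the development tools for this build might be installed like this (the package list is an assumption based on qemu's usual dependencies; other distributions use different package names and managers):

    ```shell
    # Hypothetical build-dependency list for a Debian-family system.
    sudo apt-get install build-essential git ninja-build pkg-config \
        libglib2.0-dev libpixman-1-dev
    ```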


    Downloading the Debian sid Daily Image

    # wget https://cloud.debian.org/images/cloud/sid/daily/latest/debian-sid-nocloud-amd64-daily.qcow2
    

    Hint: The default login for this image is root with no password, just hit Enter.


    Below are quotes from posts I made to MetalVPS' very fun ikiwiki chat system as qemu was getting started. There are a dozen or so messages in our chat system every day! :)


    Downloading, Compiling, and Installing Qemu

    Steps for compiling the git default qemu on darkstar:
    root@darkstar:~# cat compiling-qemu 
    # Please see instructions to download and build from git:
    # https://www.qemu.org/download/
    cd /usr/local/src
    git clone https://gitlab.com/qemu-project/qemu.git
    cd qemu
    git submodule init
    git submodule update --recursive
    cd ..
    mkdir qemu-obj ## Will move to memory based filesystem for compiling.
    tmux
    cd qemu-obj
    time ../qemu/configure
    time make -j 8  ## Half of available cores. But more cores aren't necessarily faster.
    
    real    21m59.189s  ## Not bad for an antique! :)
    user    162m23.734s
    sys     12m13.914s
    echo $?
    0
    make install
    root@darkstar:~# 
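    The comment above mentions moving the build to a memory based filesystem. One hedged way to do that, assuming the host has RAM to spare, is a tmpfs mount over the build directory:

    ```shell
    # Assumption: the host has comfortably more than 8G of free RAM.
    mount -t tmpfs -o size=8G tmpfs /usr/local/src/qemu-obj
    ```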
    

    Getting Slirp to Work

    • 20211114 16:10 MST

    I got slirp to work with this command:

    ## -cpu host -enable-kvm requires root or membership in the `kvm` group,
    ## which is not yet set up on darkstar (the SBo install allows kvm
    ## access to group `users`). -m 4G seems to work with 1G as well.
    /usr/local/bin/qemu-system-x86_64 \
    -nographic \
    -cpu host -enable-kvm \
    -m 4G \
    -hda debian-sid-nocloud-amd64-daily.qcow2
    

    What I was trying, which did NOT work, was an additional line

    -net user \
    

    This borks the network because, if I understand rightly, networking in qemu requires specifying both a front end (the emulated network interface) and a back end (the connection to the host). If I specify either the front end or the back end without the other, the network is broken. In the special case that neither the front end nor the back end are specified, qemu falls back to its default, slirp, which is equivalent to

    -netdev user,id=user.0 \
    -device e1000,netdev=user.0 \
    

    So, my single extra line was NOT the equivalent of the default.

    Please see explanation at http://www.linux-kvm.org/page/Networking.

    I am not sure my explanation is correct. I am just learning this stuff.

    -tom


    Boot delay inside qemu at "loading initial ramdisk"

    20211114 13:49 MST

    Qemu + debian sid daily from yesterday is running

    Linux debian 5.14.0-4-amd64 #1 SMP Debian 5.14.16-1 (2021-11-03) x86_64
    

    whereas darkstar is running 5.15.2. Guessing that the 52 second boot delay at "loading initial ramdisk" might be because qemu has to make the initial ramdisk rather than using the host's pre-made version? Before, on the AX101, it was running sid and so maybe the kernel versions were the same inside qemu and outside? I wasn't consciously trying to use the host's kernel inside qemu. But maybe I could do that now on darkstar?
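    For anyone poking at a delay like this, timestamped boot logs from inside the guest might help narrow down where the time goes (these commands are a general suggestion, not something from the original investigation):

    ```shell
    # Run inside the guest after it boots.
    dmesg --ctime | less          # kernel messages with wall-clock timestamps
    systemd-analyze blame | head  # slowest systemd units, if the guest runs systemd
    ```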


    Afterward

    Right now I am still trying to figure out that boot delay. Any tips for the clueless™ guy?

    One possible next step might be to try setting up networking so that VMs have access to the server's extra IP addresses. We managed to get extra IPs working on MetalVPS' former AX101, so the current server probably also will work. There are also several qemu performance tweaks which can be implemented.

    Maybe we have succeeded in catching a glimpse into how bleeding edge qemu might run bleeding edge Debian on just about any Linux distribution? If you try this and seem to have trouble, please feel free to post here in this thread or start another thread. The great guys here at LES will help you!

    Something else I am trying to figure out is where MetalVPS might be going. I have received positive feedback about the "MetalVPS Labs" concept, so I am continuing to explore that concept despite wondering if, by some definitions, posts like this might not qualify as "lab research." Nevertheless, even if all that happens is some Spirited Low End fun, I am more than satisfied! Happy LowEnding everybody!

    Best wishes and kindest regards! 🌎🌍

    Thanked by (4)Chievo _MS_ uptime saibal
  • @Not_Oles said:
    How to Launch Your Own KVM Virtual Machines On Almost Any Linux Distribution and on Almost Any Laptop or VPS
    [...]

    Cool! I have been looking into figuring out how to get a few virtual servers set up on my dedicated server. This should work on the stable version of Debian, not just sid?

    Thanked by (1)Not_Oles
  • Not_Oles (Hosting Provider, Content Writer)

    @usr123 said: This should work on the stable version of Debian, not just sid?

    Sure! As long as you are not running production, why not?

    Unless you are interested in the compiling, you could just install qemu with apt, and then slirp should work fine.
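    For the packaged route on Debian, something like this should be enough (package names per Debian, where `qemu-system-x86` provides `qemu-system-x86_64`):

    ```shell
    sudo apt-get install qemu-system-x86 qemu-utils
    ```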

    Please don't forget the preliminary backups!

    Best wishes and kindest regards! :)

  • Not_Oles (Hosting Provider, Content Writer)
    edited November 2021

    What the world needs is a new Linux distribution!

    Did you guys ever do linuxfromscratch (LFS)?

    I got LFS started a couple of times, but never finished.

    One of LFS' goals is to meet the whatever-it's-called-I-forgot standard for running commercial applications. Also, LFS doesn't always seem to use the very latest upstream releases.

    I wondered a few years ago about what would happen if a guy tried to build a minimal Linux system with everything self-compiled right out of source code management.

    Part of the answer is that compiling doesn't always work, at the most basic level, because some projects seem to leave their source code repositories in an unbuildable state for days at a time.

    Nevertheless, many times, I did get a few things compiled, to the point where those few things could rebuild themselves. Maybe with a little more fun I could have a bootable system? That might be a great way for me to learn a little about booting.

    Some people like building a new system chrooted into a separate partition -- less risk of accidentally overwriting the main system. This is why the MetalVPS antique server, darkstar, has an /altroot primary partition. . . .
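    A minimal sketch of entering such a chroot, assuming /altroot already holds a root filesystem, might look like:

    ```shell
    # Bind the pseudo-filesystems the build tools expect, then switch root.
    mount --bind /dev  /altroot/dev
    mount --bind /proc /altroot/proc
    mount --bind /sys  /altroot/sys
    chroot /altroot /bin/sh
    ```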

    Best wishes and kindest regards! ♒︎♒︎

  • edited November 2021

    Built one LFS system some 15+ years ago because I had too much free time.
    Without knowing much about what I was doing, the building process felt like a chore of copying commands from the manual.

    And once I was done, I didn't have a use case for it, and it was quickly replaced by a more mainstream distro.

    Not repeating it anymore.

    I think the only thing I remembered learning from the process is the concept of chroot.

    Thanked by (1)Not_Oles

    The all seeing eye sees everything...

  • Not_Oles (Hosting Provider, Content Writer)

    @terrorgen said:
    Built one LFS system some 15+ years ago because I had too much free time.

    Would not most everyone say that they would love to arrange such a delightful state of affairs as having too much free time? :)

    Without knowing much about what I was doing, the building process felt like a chore of copying commands from the manual.

    I remember that feeling of copying commands from the manual being a chore. So I made a little script, even though LFS has its own autobuild system. I just wanted a little, basic thing instead of parsing the LFS book.

    And once I was done, I didn't have a use case for it, and it was quickly replaced by a more mainstream distro.

    Haha, I never finished building much more than the build system. And I too have since been using more mainstream distros.

    I think the only thing I remembered learning from the process is the concept of chroot.

    This! I think I knew about chroot before I started on LFS. And I had built NetBSD-current on multiple architectures 42 zillion times. :) But there's not so much one needs to do to build NetBSD-current. NetBSD is well structured and kinda just works, including building itself. On Linux, I never had tried to construct a cross platform Linux build system. I remember looking around and finding only a little guidance. So I am especially grateful to LFS for teaching me their way of making a Linux self-build platform. This was maybe 15 years ago. :) Good times! :)

    Friendly greetings from New York City and Sonora! 🗽🇺🇸🇲🇽🏜️

    Thanked by (1)terrorgen
  • @Not_Oles said:
    Would not most everyone say that they would love to arrange such a delightful state of affairs as having too much free time? :)

    Ha. Ha.

    Thanked by (1)Not_Oles

    The all seeing eye sees everything...
