Proxmox: how to share storage between VMs

ehab Content Writer

hi,

i would like to set up 3 or 4 VMs with a second HDD that has to be shared between them all. i can live with LVM, and one volume per VM is fine... question is, how do I do that?

any help/hints appreciated.

thanks and hugs... oh wait, just thanks

ehab

Comments

  • sshfs?

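    A rough sketch of the sshfs approach (user, host and paths below are placeholders, not something from this thread): one VM exposes a directory over SSH and the others mount it with FUSE.

      # on each client VM (Debian/Ubuntu)
      apt install sshfs
      mkdir -p /mnt/shared
      # allow_other needs user_allow_other enabled in /etc/fuse.conf when not run as root
      sshfs ehab@storage-vm:/srv/shared /mnt/shared -o reconnect,allow_other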

  • InceptionHosting Hosting Provider OG
    edited September 2020

    Personally I would just make one more VM, give that the shared drive volume/image, and use either:

    iscsi
    samba
    sshfs
    nfs

    Depends on why you want the share and what limitations you have in terms of requirements.

    Trying to directly mount the same physical image in numerous VMs is not going to work well, I imagine.

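    A rough sketch of the NFS variant of this (VM names, subnet and paths are placeholders): the extra VM exports a directory, and the other VMs mount it over the private network.

      # on the storage VM
      apt install nfs-kernel-server
      mkdir -p /srv/shared
      echo '/srv/shared 10.0.0.0/24(rw,sync,no_subtree_check)' >> /etc/exports
      exportfs -ra

      # on each client VM
      apt install nfs-common
      mkdir -p /mnt/shared
      mount -t nfs 10.0.0.10:/srv/shared /mnt/shared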

  • lentro Hosting Provider

    I believe you can edit /etc/pve/qemu-server/xxx.conf (e.g. 100.conf, or whatever your VM ID is) and manually include the name of the disk used in another machine.

    Don't know, though. Apologies if I misled you. Just an idea that I have never tested.
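
    For illustration only (untested; the VM IDs and volume name are made up), the idea would be to reference the same volume from a second VM's config by hand:

      # /etc/pve/qemu-server/100.conf  (the VM the disk was created for)
      scsi1: local-lvm:vm-100-disk-1,size=32G

      # /etc/pve/qemu-server/101.conf  (same volume added again manually)
      scsi1: local-lvm:vm-100-disk-1,size=32G

    Be aware that two guests writing to the same block device at once will corrupt an ordinary filesystem; this only makes sense with a cluster filesystem such as OCFS2 or GFS2 on top.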

  • ehab Content Writer

    iSCSI for me is the fastest, and I have been experimenting with it... I have read it's not for mission-critical use, but it has been around for a while.

  • ehab Content Writer

    @thedp said:
    sshfs?

    Not for my use case.

  • InceptionHosting Hosting Provider OG

    You could always set up multipath on the host node and have it run as a pseudo-SAN; then you would be able to present the targets to all the VMs simultaneously, I guess.

  • ehab Content Writer

    I think I will set up one large iSCSI target on the same node and share it... interesting how one has to work around limitations.

    This I might add as a write-up in your wiki for sure :) hopefully it will be good enough for others.
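
    A possible sketch of that, using targetcli on the host (IQNs, volume group, device and IP are placeholders; the usual single-writer caveat applies unless a cluster filesystem is used):

      # on the Proxmox host: carve out an LV and export it as an iSCSI LUN
      lvcreate -L 100G -n shared pve
      targetcli /backstores/block create name=shared dev=/dev/pve/shared
      targetcli /iscsi create iqn.2020-09.local.pve:shared
      targetcli /iscsi/iqn.2020-09.local.pve:shared/tpg1/luns create /backstores/block/shared
      targetcli /iscsi/iqn.2020-09.local.pve:shared/tpg1/acls create iqn.2020-09.local.vm1:initiator

      # in each VM (initiator name must match the ACL), using open-iscsi
      iscsiadm -m discovery -t sendtargets -p 10.0.0.1
      iscsiadm -m node -T iqn.2020-09.local.pve:shared -p 10.0.0.1 --login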

  • I don't understand how this can work. SCSI devices expect to have a single host; won't multiple initiators clobber each other? I think you want NFS or similar if you really want the VMs to see a shared filesystem. If you want them to each see their own disk, use whatever the usual libvirt mechanism is for that (hmm, maybe it's iSCSI per partition or so?).

  • edited September 2020

    I was going to deal with this same problem in the coming week.

    I would have thought NFSv4 over a private bridge (wireguard optional?)
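
    Roughly, assuming the host (or a storage VM) exports /srv/shared and is reachable at 10.10.10.1 on the private bridge (addresses and paths are examples), each guest would just need an fstab entry like:

      10.10.10.1:/srv/shared  /mnt/shared  nfs4  rw,hard,_netdev  0  0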

  • Bind mount

  • I would set up an extra VM and run FreeNAS or similar on it and share the storage from there via iSCSI or NFS, depending on requirements.

  • @terrorgen said:
    Bind mount

    Will only work for containers, not VM guests.

    While not officially supported, you can use 9p: http://www.linux-kvm.org/page/9p_virtio
    If I remember correctly, there is even a way to add it to the conf files of the VMs...

    As others pointed out, the best alternative, though, would be an NFS server running on the host that the guests can simply connect to.
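
    A sketch of the 9p route (untested here; the shared path, mount tag and VM ID are placeholders). The extra QEMU arguments go into the VM's conf file, and the guest then mounts the tag:

      # appended to /etc/pve/qemu-server/100.conf
      args: -fsdev local,security_model=passthrough,id=fsdev0,path=/srv/shared -device virtio-9p-pci,id=fs0,fsdev=fsdev0,mount_tag=hostshare

      # inside the guest
      mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/shared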

  • @Falzo said:

    Will only work for containers, not VM guests.

    You're damn right! Silly me
