I would like to set up 3 or 4 VMs with a 2nd HDD that has to be shared between them all. I can live with LVM, and one volume per VM is fine... question is, how do I do that?
I believe you can edit /etc/pve/qemu-server/xxx.conf (e.g. 100.conf, or whatever your VM ID is) and manually include the name of the disk used in another machine.
I don't know for sure, though; apologies if I'm misleading you. Just an idea that I have never tested.
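If that works at all, the line in the second VM's conf would probably look something like this (the storage and volume names are just examples, and without a cluster filesystem on top, two VMs writing to the same disk will corrupt it):

    # hypothetical /etc/pve/qemu-server/101.conf, pointing at the volume VM 100 already owns
    scsi1: local-lvm:vm-100-disk-1,size=32G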
You could always set up multipath on the host node and have it run as a pseudo-SAN; then you would be able to present the targets to all the VMs simultaneously, I guess.
I don't understand how this can work. SCSI devices expect to have a single host; won't multiple ones clobber each other? I think you want NFS or similar if you really want the VMs to see a shared filesystem. If you want them each to see their own disk, use whatever the usual libvirt mechanism is for that (maybe it's iSCSI per partition, or so?).
While not officially supported, you can use 9p: http://www.linux-kvm.org/page/9p_virtio
If I remember correctly there is even a way to add it to the conf files of the VMs...
As others pointed out, though, the best alternative would be an NFS server running on the host that the guests simply connect to.
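Roughly like this, if the args: trick still works; the host path and mount tag below are made up:

    # in /etc/pve/qemu-server/100.conf: pass the 9p device straight to QEMU
    args: -fsdev local,id=fsdev0,path=/srv/share,security_model=mapped-xattr -device virtio-9p-pci,fsdev=fsdev0,mount_tag=hostshare

    # inside the guest
    mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/share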
sshfs?
Personally I would just make one more VM, give that the shared drive volume/image, and use either:
iscsi
samba
sshfs
nfs
Depends on why you want the share and what limitations you have in terms of requirements.
Trying to directly mount the same physical image in numerous VMs is not going to work well, I imagine.
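For the NFS variant of that, the server side on the extra VM is only a couple of lines (subnet and path are placeholders):

    # /etc/exports on the storage VM
    /srv/shared  10.10.10.0/24(rw,sync,no_subtree_check)

    # apply the export table
    exportfs -ra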
iSCSI for me is the fastest and I have been experimenting with it... I have read it's not for mission-critical use, but it has been around for a while.
Not for my use case, though.
I think I will set up one large iSCSI target on the same node and share it... interesting how one has to work around limitations.
This I might add as a write-up to your wiki; hopefully it will be good enough for others.
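In case it helps the write-up, this is roughly what exporting one big LVM volume from the node with targetcli (LIO) and logging in from the VMs looks like. The LV name, IQN and IP are placeholders, ACL/auth setup is omitted, and the initiators still need a cluster-aware filesystem (or one LUN each) if they all write to it:

    # on the node: expose the LV as a block backstore and publish it as an iSCSI target
    targetcli /backstores/block create name=shared0 dev=/dev/pve/shared
    targetcli /iscsi create iqn.2024-01.local.pve:shared0
    targetcli /iscsi/iqn.2024-01.local.pve:shared0/tpg1/luns create /backstores/block/shared0
    targetcli saveconfig

    # on each VM: discover and log in
    iscsiadm -m discovery -t sendtargets -p 10.10.10.1
    iscsiadm -m node -T iqn.2024-01.local.pve:shared0 -p 10.10.10.1 --login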
I was going to deal with this same problem in the coming week.
I would have thought NFSv4 over a private bridge (WireGuard optional?).
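Client side that would just be something like (bridge address and export path made up):

    # mount the export over the private bridge, or put the same thing in /etc/fstab
    mount -t nfs4 10.10.10.1:/srv/shared /mnt/shared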
Bind mount
I would set up an extra VM and run FreeNAS or similar on it, and share the storage from there via iSCSI or NFS, depending on requirements.
A bind mount will only work for containers, not VM guests.
You're damn right! Silly me
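For containers it really is a one-liner, something like this (container ID and host path are hypothetical):

    # bind-mount a host directory into LXC container 101 as mount point 0
    pct set 101 -mp0 /srv/shared,mp=/mnt/shared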