In part 1, I talked about the server side of Logical Domains. This time, I'll cover how I set up a guest domain.
First, I have created a zfs pool called storage on the host, and I'm going to present a zvol (or maybe several) to the guests.
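If you don't already have a pool, it's just a regular zpool. A minimal sketch, assuming a couple of spare internal drives at c0t1d0 and c0t2d0:
zpool create storage mirror c0t1d0 c0t2d0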
I'm going to create a little domain called ldom1.
ldm add-domain ldom1
ldm set-core 1 ldom1
ldm add-memory 4G ldom1
ldm add-vnet vnet1 primary-vsw0 ldom1
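These commands assume the primary-vsw0 virtual switch (and the primary-vds0 disk service used in a moment) already exist from part 1. If in doubt, you can ask the control domain what services it's offering:
ldm list-services primary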
Now create and add a virtual disk: create a dataset for the ldom and a 24GB volume inside it, add the volume to the virtual disk service, attach it to the domain, and set it as the boot device.
zfs create storage/ldom1
zfs create -V 24gb storage/ldom1/disk0
ldm add-vdsdev /dev/zvol/dsk/storage/ldom1/disk0 ldom1_disk0@primary-vds0
ldm add-vdisk disk0 ldom1_disk0@primary-vds0 ldom1
ldm set-var auto-boot\?=true ldom1
ldm set-var boot-device=disk0 ldom1
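As a quick sanity check, you can read a variable back:
ldm list-variable boot-device ldom1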
Then bind resources, list, and start:
ldm bind-domain ldom1
ldm list-domain ldom1
ldm start-domain ldom1
You can connect to the console by telnetting to the port number shown under CONS in the list-domain output:
# ldm list-domain ldom1
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
ldom1            bound      ------  5000    8     4G
# telnet localhost 5000
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Connecting to console "ldom1" in group "ldom1" ....
Press ~? for control options ..
T5140, No Keyboard
Copyright (c) 1998, 2014, Oracle and/or its affiliates. All rights reserved.
OpenBoot 4.33.6.e, 4096 MB memory available, Serial #83586105.
Ethernet address 0:14:4f:fb:6c:39, Host ID: 84fb6c39.
Boot device: disk0 File and args:
Bad magic number in disk label
ERROR: /virtual-devices@100/channel-devices@200/disk@0: Can't open disk label package
ERROR: boot-read fail
Evaluating:
Can't open boot device
{0} ok
That's expected: the disk is blank, with no label and no OS installed yet, so the next step is to install one over the network or from an iso. If you attempt to boot it off the net, it says:
Requesting Internet Address for 0:14:4f:f9:87:f9
which doesn't match the MAC address in the console output: the guest's vnet has its own MAC address, distinct from the one in the OpenBoot banner. So if you want to jumpstart the box (easy enough), you need to do a 'boot net' first to get the actual MAC address of the ldom, then add that to your jumpstart server.
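If you'd rather not boot the domain just to read a MAC address off the screen, the control domain can tell you directly; listing the network details for the domain should show the vnet's MAC:
ldm list -o network ldom1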
To add an iso image to boot from:
ldm add-vdsdev options=ro /var/tmp/img.iso iso@primary-vds0
ldm add-vdisk iso iso@primary-vds0 ldom1
At the ok prompt, you can issue the 'show-disks' command to see what disk devices are present. To boot off the iso (disk@1 here, since it's the second vdisk added to the domain):
boot /virtual-devices@100/channel-devices@200/disk@1:f
And it should work. This is how I've been testing the Tribblix images for SPARC, by the way.
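Once you've finished installing, you can unhook the iso again (with the domain stopped, or the vdisk otherwise not in use):
ldm remove-vdisk iso ldom1
ldm remove-vdsdev iso@primary-vds0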
4 comments:
Note that you lose a lot of performance when you use ZFS volume-backed vdisks with LDoms. This is still a concern even with the recent improvements in vdisk performance. Use entire LUNs or disks instead when you need the best performance.
Well yes, but if you just have a small number of internal drives to play with, and you're looking at setting up a relatively large number of guest domains, then the flexibility you get with ZFS (including cloning) easily outweighs any potential performance loss. For development and testing, the added convenience is a huge win; production use would require a little more architecture.
True, sometimes it's your only choice and it is quick and easy to clone environments if you're prototyping. But bear in mind that people often choose LDoms over zones because of the extra isolation between instances, and using ZFS volumes as vdisks will defeat the purpose by creating a painful shared bottleneck between all instances (e.g. I've seen IO response times of >1000ms in guest LDoms from this). If you just follow the official docs then you'll have no idea of the performance cost of this approach.
Some tips: if you don't care about data integrity, use "zfs set sync=disabled" on the zvol to boost performance. And if you change your mind later and want to go with whole disks, zpool attach/detach from within the guest domain can get you there relatively easily.
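For example, with the zvol from the post:
zfs set sync=disabled storage/ldom1/disk0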
Use options=slice, like below:
ldm add-vdsdev options=slice /dev/zvol/dsk/storage/ldom1/disk0 ldom1_disk0@primary-vds0