Saturday, December 28, 2013

SuSE Linux Enterprise Server: SLES containers with LDAP (TLS) authentication.

SLES is free to download and use. Sign up at suse.com; the trial code provides updates for 60 days.


I.) OS level vs HW level virtualization

What is LXC?
LXC (LinuX Containers) provides OS-level virtualization. It's similar to OpenVZ on Linux, FreeBSD Jails, Solaris Zones, and AIX WPARs.

What are the differences between HW-level and OS-level virtualization?
In a nutshell:
- HW-level virtualization emulates computer hardware (a guest virtual machine); the guest OS runs on this emulated hardware and thinks it's running on a real machine.
- OS-level virtualization doesn't emulate hardware; it gives each guest an isolated environment with its own process management, network namespace, etc. For example, the guest's file systems are chrooted directories on the host OS.
For the full details you can search the web.

What are LXC's advantages, why use it?
Extremely fast container creation. A full SLES installation takes 1-2 minutes without user interaction. If you create more containers, the next ones are built from the RAM cache in 10-20 seconds.
Extremely fast to stop/start/reboot containers. Boot time is 3-6 seconds (including system daemons such as sshd), shutdown takes 1-2 seconds.
Dynamic resource usage; the RAM management in particular is amazing.
You can access/modify/back up your guests' filesystems directly, as these FSs are directories on the host OS. Quicker troubleshooting.

Is LXC secure?
It's more secure than it used to be, but not as secure as a hypervisor-provided environment. If you need very strong security, use a hypervisor instead of LXC.

What can I use as a guest OS?
Various Linux distributions.

Which one should I use?
HW virt.: If you need 100% isolated guests, mixed guest operating systems (Linux, Windows, BSD) on the same hypervisor, and improved security. For example: a company running Windows AD, Oracle databases on Oracle Linux instances, and SLES servers with middleware products.
OS virt.: If you need a resource- and cost-effective (cheaper) environment with Linux guests only, a homogeneous environment. For example: web hosting with additional services, or a middleware farm with development, test and production environments.

Both have their own advantages/disadvantages, choose the one that suits your needs.


II.) Install LXC on SuSE Linux Enterprise Server

My base system is a 64-bit SuSE Linux Enterprise Server 11 SP3, a default installation with these software patterns:
Base System
AppArmor
32-bit Runtime Env.
Help and Support Doc.
Minimal System (Appliances)

This is a virtual machine. I have limited hardware resources (5 GB RAM and 30 GB free disk space) for my purposes, but I need 4 independent SLES instances to test a lot of middleware.
5 GB RAM and 30 GB disk for 4 virtual machines doesn't sound too good. So I created one VM with three LXCs inside it, and assigned the 5 GB RAM and 30 GB disk to this VM; the VM and the three LXCs share the resources.
The other reason is that I had never used LXC, and I wanted to try this interesting technology.

This is my virtual machine:
Linux vm 3.0.93-0.8-default #1 SMP Tue Aug 27 08:44:18 UTC 2013 (70ed288) x86_64 x86_64 x86_64 GNU/Linux

The memory consumption of the VM and the three running LXCs, i.e. four running SLES 11 SP3 instances with their system daemons:
             total       used       free     shared    buffers     cached
Mem:          4906        464       4441          0         21        281
-/+ buffers/cache:        194       4711
Swap:          127          0        127

Not bad. This default install scenario takes 1493 MBytes on the root filesystem.

Filesystem              1M-blocks  Used Available Use% Mounted on
/dev/mapper/vmvg-rootlv      4032  1493      2335  39% /

Install LXC, set up the network bridge

zypper install lxc yast2-lxc bridge-utils
yast -> Network Devices -> Network Settings
Add -> Device Type: Bridge -> Next
Statically assigned IP Address: 192.168.2.2, Subnet mask: 255.255.255.0 # The same IP address as eth0 currently has.
Bridged Devices: [x] eth0 - Ethernet Card 0 (configured), Continue, Quit.

vm:~ # ifconfig -a
br0       Link encap:Ethernet  HWaddr 52:54:00:5F:8D:F7
          inet addr:192.168.2.2  Bcast:192.168.2.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:76 errors:0 dropped:0 overruns:0 frame:0
          TX packets:51 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:4595 (4.4 Kb)  TX bytes:26814 (26.1 Kb)

eth0      Link encap:Ethernet  HWaddr 52:54:00:5F:8D:F7
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2074 errors:0 dropped:302 overruns:0 frame:0
          TX packets:1313 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1398189 (1.3 Mb)  TX bytes:406608 (397.0 Kb)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:2 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:100 (100.0 b)  TX bytes:100 (100.0 b)

And the active ssh connection hasn't terminated. :-)
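If you prefer to skip yast, the bridge can also be defined by hand. This is only a rough sketch of how the SUSE sysconfig files could look with my addresses (assumed content, adjust it to your own network); yast writes the same kind of files for you:

# /etc/sysconfig/network/ifcfg-br0:
BOOTPROTO='static'
IPADDR='192.168.2.2/24'
STARTMODE='auto'
BRIDGE='yes'
BRIDGE_PORTS='eth0'
BRIDGE_STP='off'
# /etc/sysconfig/network/ifcfg-eth0 - the IP address moves to the bridge:
BOOTPROTO='static'
IPADDR=''
STARTMODE='auto'
# apply it (this may drop an active ssh session):
service network restart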

I also prepared /etc/hosts on vm:
127.0.0.1     localhost
192.168.2.2   vm
192.168.2.3   lxc1
192.168.2.4   lxc2
192.168.2.5   lxc3

yast -> Security and Users -> Firewall -> Interfaces -> br0, eth0: Internal Zone (intranet server, all ports are open)

The main lxc directory is /lxc here; the containers will be installed in /lxc/container_name1|2|3, etc. /lxc could be anything else;
the original (and default) /var/lib/lxc/ or /srv are fine as well. I created LVM-based filesystems for my containers.
mkdir /lxc && cd /lxc && mkdir lxc1 lxc2 lxc3

yast -> System -> Partitioner -> Volume Management

Now:
││Device          │     Size│F│Enc│Type     │FS Type│Label│Mount Point│Mount by│Used by│M│  
││/dev/vmvg       │ 29.88 GB│ │   │LVM2 vmvg│       │     │           │        │       │L│  
││/dev/vmvg/homelv│128.00 MB│ │   │LV       │Ext3   │     │/home      │Kernel  │       │ │  
││/dev/vmvg/rootlv│  4.00 GB│ │   │LV       │Ext3   │     │/          │Kernel  │       │ │  
││/dev/vmvg/swaplv│128.00 MB│ │   │LV       │Swap   │     │swap       │Kernel  │       │ │  
││/dev/vmvg/tmplv │  1.00 GB│ │   │LV       │Ext3   │     │/tmp       │Kernel  │       │ │

Add -> Logical Volume -> Name: lxc1lv -> Type: (x) Normal Volume -> (x) Custom Size, Size: 5GB, Stripes: Number 1
(x) Format partition, File System: ext3, (x) Mount partition, Mount Point: /lxc/lxc1, Finish

Create 2 more FSs in the same way, use lxc2 and lxc3. Later you can modify these FSs, explore LVM! ;-)

vm:~ # df -m /lxc/*
Filesystem              1M-blocks  Used Available Use% Mounted on
/dev/mapper/vmvg-lxc1lv      5040   139      4646   3% /lxc/lxc1
/dev/mapper/vmvg-lxc2lv      5040   139      4646   3% /lxc/lxc2
/dev/mapper/vmvg-lxc3lv      5040   139      4646   3% /lxc/lxc3
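For reference, a command line sketch of what the yast partitioner steps above do (the volume group name vmvg and the mount points are the ones from my setup):

lvcreate -L 5G -n lxc1lv vmvg
mkfs.ext3 /dev/vmvg/lxc1lv
echo '/dev/vmvg/lxc1lv /lxc/lxc1 ext3 defaults 1 2' >> /etc/fstab
mount /lxc/lxc1
# repeat with lxc2lv and lxc3lv for /lxc/lxc2 and /lxc/lxc3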

Whoops, /tmp is too big.
umount /tmp
Go back to LVM module in yast.
Select: │/dev/vmvg/tmplv │  1.00 GB│ │   │LV          │Ext3   │     │/tmp *        │Kernel  │   │
Press Enter on it, Resize, (x) Custom Size, size: 512MB, Next, Finish, Quit.

That's all; yast remounted it automatically, and now it has the size I wanted:

vm:~ # df -m /tmp
Filesystem             1M-blocks  Used Available Use% Mounted on
/dev/mapper/vmvg-tmplv       504    33       446   7% /tmp
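The same shrink from the command line would look roughly like this (ext3 can only be shrunk offline, that's why the umount is needed):

umount /tmp
e2fsck -f /dev/vmvg/tmplv          # shrinking requires a clean filesystem
resize2fs /dev/vmvg/tmplv 512M     # shrink the filesystem first...
lvreduce -L 512M /dev/vmvg/tmplv   # ...then the logical volume
mount /tmp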


III.) Creating containers

Unfortunately there is no option to change the default path or destination of the newly created containers if you create them via yast's LXC module.
No problem, the original template can be modified. Note: there are templates to install Debian, Ubuntu, etc. in containers as well.

cd /usr/share/lxc/templates
cp lxc-sles lxc-sles.orig
cat lxc-sles | grep '$path'
sed -i 's/\$path/\/lxc\/lxc1/g' lxc-sles
cat lxc-sles | grep '/lxc/lxc1'

yast -> Miscellaneous -> Lxc -> Create
Name: lxc1, Template: sles
IP address: 192.168.2.3, Subnet: /24, Bridge: br0
Root Password: 1234, Repeat Password: 1234
# Choose strong password, 1234 is an example.
If you want to replace a container with a new one using the same (existing) name, then destroy it, create a new one and choose [Replace With New].

A tip: run yast -> Software -> Online Update before creating a SLES container; this way the container OS will be created with the newest packages.

Connect to your newly created SLES container:
lxc-console --name lxc1 # Use this if you cannot connect via SSH.
ssh root@lxc1 # Start with some housekeeping, create an admin user, set up sudo and disable root from SSH. See later.
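The container can also be checked and controlled from the host with the lxc command line tools (the exact output depends on your lxc version):

lxc-ls                     # list the containers
lxc-info --name lxc1       # state and PID of lxc1
lxc-start --name lxc1 -d   # start it in the background
lxc-stop --name lxc1       # stop it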

You can create more containers in the same way:

cd /usr/share/lxc/templates
# We have a backup named lxc-sles.orig
cp lxc-sles.orig lxc-sles
cat lxc-sles | grep '$path'
sed -i 's/\$path/\/lxc\/lxc2/g' lxc-sles
cat lxc-sles | grep '/lxc/lxc2'

yast -> Miscellaneous -> Lxc -> Create
Name: lxc2, Template: sles
IP address: 192.168.2.4, Subnet: /24, Bridge: br0
Root Password: 1234, Repeat Password: 1234

cd /usr/share/lxc/templates
# We have a backup named lxc-sles.orig
cp lxc-sles.orig lxc-sles
cat lxc-sles | grep '$path'
sed -i 's/\$path/\/lxc\/lxc3/g' lxc-sles
cat lxc-sles | grep '/lxc/lxc3'

yast -> Miscellaneous -> Lxc -> Create
Name: lxc3, Template: sles
IP address: 192.168.2.5, Subnet: /24, Bridge: br0
Root Password: 1234, Repeat Password: 1234


IV.) The containers exist, first steps.

After I logged in to my lxc1 (as root ...yet):
lxc1:~ # df -m /
Filesystem              1M-blocks  Used Available Use% Mounted on
/dev/mapper/vmvg-lxc1lv      5040   926      3859  20% /

/dev/mapper/vmvg-lxc1lv is /lxc/lxc1 on the host system, and the root (/) FS inside the container.

As your host SLES is already registered and the container's SLES is a clone, there is no need to register this one.
But the original installation media should be removed as a software source.
yast -> Software -> Software Repositories, select:
99 (Default)│   x   │           │SUSE-Linux-Enterprise-Server-11-SP3 11.3.3-1.138│cd:///?devices=/dev/sr0
then [ ] Enabled # Disable it.
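The same thing can be done with zypper from the shell; the repository number below is just an example, check the output of zypper repos first:

zypper repos                     # find the number or alias of the DVD repository
zypper modifyrepo --disable 99   # disable it (use your own number or alias)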

Go to yast's "Software Management", type sudo into the "Search Phrase" field and press Enter; it offers 2 packages:
sudo      │Execute some commands as root│1.7.6p2     │           │     1.2 MiB│
yast2-sudo│YaST2 - sudo configuration   │2.17.3      │           │   164.0 KiB│

If you will be dealing with apps that have an X11 frontend, I suggest installing the xterm package; it's perfect for testing whether X forwarding works.

Select them with the space key and press "Accept", then accept the dependencies as well.

Go to yast -> Security and Users -> User and Group Management -> Add
Add a user here, I chose lxc1op (and lxc2op, lxc3op for the other two).

There is a yast module for sudo, but use visudo this time.
Find this line: Defaults env_keep = "LANG LC_ADDRESS ...
And add DISPLAY to the start of the list, like this:
Defaults env_keep = "DISPLAY LANG LC_ADDRESS
This is needed if you want to run apps with an X frontend through sudo (or after a sudo su -).

Find these lines:
Defaults targetpw   # ask for the password of the target user i.e. root
ALL     ALL=(ALL) ALL   # WARNING! Only use this together with 'Defaults targetpw'!
And disable them, see:
# Defaults targetpw   # ask for the password of the target user i.e. root
# ALL     ALL=(ALL) ALL   # WARNING! Only use this together with 'Defaults targetpw'!

As this is a test container, I have only one user here, so I don't need a complicated sudo config, but you can work with groups etc. if needed. Later I'll show you how.
Find these lines:
# User privilege specification
root ALL=(ALL) ALL

And add your admin user under root, like this:
# User privilege specification
root ALL=(ALL) ALL
lxc1op ALL=(ALL) ALL

Save and quit visudo (as in the vi editor: :wq).

Disable root from SSH:
cp /etc/ssh/sshd_config /etc/ssh/sshd_config.orig
sed -i 's/#PermitRootLogin\ yes/PermitRootLogin\ no/g' /etc/ssh/sshd_config
service sshd restart
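A quick check that the change really took effect (from the host or any other machine):

ssh root@lxc1     # should now be refused
ssh lxc1op@lxc1   # log in with the admin user instead...
sudo su -         # ...and switch to root with lxc1op's password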



V.) Generate certificates for LDAP.

yast -> Security and Users -> CA Management

[Launch CA Management Module]

[ Create Root CA ]
CA name: vmcacert # The name simply stands for "vm CA certificate".
Common name: vm.lab (In most cases this is the FQDN of the server, for example vm12.company.com)
E-Mail Addresses: ipl873@gmail.com
Organization: lab # In most cases this is the company name
Organizational Unit: sysops
Locality:
State:
Country: Hungary # Select your country.
[Next]
Type a pwd twice.
Key Length (bit): 2048 # This is the default; it's strong enough for test purposes. If you build real servers, choose 4096.
Valid Period (days): 3650
Advanced -> nsComment -> Delete the string: "YaST Generated CA Certificate"
Next, Create.

Now export it for future use.
Select vmcacert, [ Enter CA ], press [Advanced...↓] -> Export to File -> (x) Certificate and the Key Unencrypted in PEM Format -> File Name: /root/vmcacert.pem, 3x OK

Select vmcacert, and press [ Enter CA ], go to Certificates tab, [Add↓], Add Server Certificate.
Common name: vm # In most cases this is the FQDN of the server, for example vm12.company.com. My host's name is simply vm.
E-Mail Addresses: ipl873@gmail.com
Organization: lab # In most cases this is the company name
Organizational Unit: sysops
Locality:
State:
Country: Hungary # Select your country
[x] Use CA Password as Certificate Password
Key Length (bit): 2048
Valid Period (days): 3649
Advanced -> nsComment -> Delete the string: "YaST Generated Server Certificate"
Next, Create.

The cert has been created.

│Status│Common Name│E-Mail Address  │Organization│Organizational Unit│Locality│State│Country
│Valid │vm         │ipl873@gmail.com│lab         │sysops             │        │     │HU

Select it, then [Export↓] -> Export to File -> (x) Certificate and the Key Unencrypted in PEM Format
Certificate Password: Add vmcacert's pwd here. -> File Name: /root/vm.pem, 3x OK, Finish, Quit.

cd /root

cp vm.pem vm_pub.pem
cp vm.pem vm_priv.pem
rm vm.pem
vi vm_pub.pem
# Keep the first part of the content starting with -----BEGIN CERTIFICATE----- and ending with -----END CERTIFICATE-----
vi vm_priv.pem
# Keep the second part of the content starting with -----BEGIN RSA PRIVATE KEY----- and ending with -----END RSA PRIVATE KEY-----

cp vmcacert.pem vmcacert_pub.pem
cp vmcacert.pem vmcacert_priv.pem
rm vmcacert.pem
vi vmcacert_pub.pem
# Keep the first part of the content starting with -----BEGIN CERTIFICATE----- and ending with -----END CERTIFICATE-----
vi vmcacert_priv.pem
# Keep the second part of the content starting with -----BEGIN RSA PRIVATE KEY----- and ending with -----END RSA PRIVATE KEY-----
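If you don't like editing PEM files by hand, openssl can split the exported files too (run it on the original vm.pem and vmcacert.pem before removing them; the result is the same):

openssl x509 -in vm.pem -out vm_pub.pem    # keeps only the certificate part
openssl rsa -in vm.pem -out vm_priv.pem    # keeps only the unencrypted private key
openssl x509 -in vmcacert.pem -out vmcacert_pub.pem
openssl rsa -in vmcacert.pem -out vmcacert_priv.pem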

As I'm using this cert and its CA for LDAP, I put them into /etc/ssl/certs/openldap. You can choose another destination, or you can create more CAs and certs.
But don't forget to use the proper file permissions to protect the private keys.
mkdir /etc/ssl/certs/openldap
mv vm*.pem /etc/ssl/certs/openldap
chown -R ldap:ldap /etc/ssl/certs/openldap
chmod 500 /etc/ssl/certs/openldap
chmod 400 /etc/ssl/certs/openldap/*
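Before wiring the files into OpenLDAP it's worth a quick sanity check with openssl:

openssl verify -CAfile /etc/ssl/certs/openldap/vmcacert_pub.pem /etc/ssl/certs/openldap/vm_pub.pem   # should print: OK
openssl x509 -in /etc/ssl/certs/openldap/vm_pub.pem -noout -subject -dates                           # CN and validity period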


VI.) Set up OpenLDAP.

On vm: yast -> Network Services -> LDAP Server
Allow it to install the openldap2 package.

Start LDAP Server (x) Yes
Firewall Settings [x] Open Port in Firewall
Server type (x) Stand-alone server

Basic Settings
  [x] Enable TLS
  [x] Enable LDAP over SSL (ldaps) interface

CA Certificate File (PEM Format): /etc/ssl/certs/openldap/vmcacert_pub.pem
Certificate File (PEM Format): /etc/ssl/certs/openldap/vm_pub.pem
Certificate Key File (PEM Format - Unencrypted): /etc/ssl/certs/openldap/vm_priv.pem

Next takes you to Basic Database Settings
Database Type: hdb
Base DN: dc=vmldap
Administrator DN: cn=Administrator [x] Append Base DN
LDAP Administrator Password: add a strong pwd here...
Validate Password: ...and again.
Database Directory: /var/lib/ldap
[x] Use this database as the default for OpenLDAP clients
Next.

Note: save this info, you'll need it later.
Start LDAP Server: Yes
Register at SLP Service: No
Database Suffix: dc=vmldap
Administrator DN: cn=Administrator,dc=vmldap
Finish

Check OpenLDAP:
ps -ef | grep ldap
cat /var/log/messages | grep slapd
telnet localhost 389
telnet localhost 636
telnet vm 389
telnet vm 636
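The telnet tests only prove that the ports are open; ldapsearch (from the openldap2-client package) gives a more meaningful test. -x is a simple bind, -ZZ forces a successful StartTLS:

ldapsearch -x -H ldap://vm -b dc=vmldap -ZZ -D cn=Administrator,dc=vmldap -W
# If the TLS handshake fails, point TLS_CACERT in /etc/openldap/ldap.conf at vmcacert_pub.pem.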

Check OpenLDAP with LDAP browser.
yast -> Network Services -> LDAP Browser
[Add] a new client config here,
Enter the name of the new LDAP connection: vmldap
LDAP Server: vm
Administrator DN: cn=Administrator,dc=vmldap
LDAP Server Password: Type your LDAP srv pwd here
[ ] LDAP TLS # You can set up TLS connection if you wish, of course. I wanted to try LDAP first without TLS.

Now you're able to browse the LDAP database; it's almost empty ...yet.

Set up vm for LDAP auth.

yast -> Network Services -> LDAP Client
Accept the 4 packages that need to be installed.

User Authentication (x) Use LDAP
Addresses of LDAP Servers: 127.0.0.1
LDAP Base DN: dc=vmldap
Secure Connection [ ] LDAP TLS/SSL # You can set up TLS connection if you wish, of course. I wanted to try LDAP first without TLS.
[x] Start Automounter
[x] Create Home Directory on Login

[Advanced Configuration...] -> Administration Settings
Configuration Base DN: ou=ldapconfig,dc=vmldap
Administrator DN: cn=Administrator,dc=vmldap
[x] Create Default Configuration Objects
[x] Home Directories on This Machine
[Configure User Management Settings...], 2x OK


VII.) Set up an LDAP user.

yast -> Security and Users -> User and Group Management

Select Groups, [Set Filter↓], LDAP Groups
[Add], Groupname: lxcops, don't modify anything, press OK.

Select Users, [Set Filter↓] LDAP Users
[Add], First Name: Johnny, Last Name: Bravo, Username: jbravo, give the pwd twice for this user, and select "Details".
Home Directory Permission Mode: 750
At "LDAP Groups", [x] lxcops # Johnny will be an operator of the SLES containers.
2x OK.

cat /etc/passwd | grep jbravo # jbravo does not exist as a standard un*x user.
id jbravo # But it exists as an LDAP user.
uid=1003(jbravo) gid=100(users) groups=1000(lxcops),100(users)
ls -ld /home/jbravo # jbravo's home exists.
drwxr-x--- 5 jbravo users 1024 Dec 26 17:31 /home/jbravo
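You can also check the new user from the shell on vm:

getent passwd jbravo                        # resolved through NSS from LDAP
ldapsearch -x -b dc=vmldap '(uid=jbravo)'   # the raw directory entry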

Check this user in yast's LDAP Browser, at "LDAP Connections" press the down key, select "vmldap" and press enter.
Provide the LDAP admin pwd and press OK, then open "dc=vmldap", you'll see this:
│─┬─dc=vmldap
│ ├──ou=group
│ ├──ou=ldapconfig
│ ├┬─ou=people
│ │└──uid=jbravo



Now it's clear that LDAP authentication is working as expected. It's not necessary to set up the LDAP Client on your host machine (vm in my case).
You can manage LDAP users/groups from any OS if you set up "LDAP Client" and "Administration Settings" (see above for how).
I chose another scenario: here I'm managing my users only from vm. The clients (the 3 LXCs) can authenticate against vm (through TLS), but that's all;
no LDAP management on my LXCs.


VIII.) Set up LDAP authentication on lxc1.

Share the CA and the server cert you've already created on vm.

cd /etc/ssl/certs/openldap/ # Copy the certs from vm to lxc1.
scp *pub.pem lxc1op@lxc1:/tmp

Log in to lxc1 as lxc1op and edit /etc/hosts: # Or set up your nameserver to resolve these names; I don't need that, as I only have this small environment.
127.0.0.1     localhost
192.168.2.2   vm
192.168.2.3   lxc1
192.168.2.4   lxc2
192.168.2.5   lxc3

sudo su - # The steps below need root privileges.
cd /tmp
mkdir /etc/ssl/certs/openldap
mv *pub.pem /etc/ssl/certs/openldap
chmod 755 /etc/ssl/certs/openldap
chmod 644 /etc/ssl/certs/openldap/*

yast -> Network Services -> LDAP Client
Accept the packages that need to be installed.

User Authentication (x) Use LDAP
Addresses of LDAP Servers: vm
LDAP Base DN: dc=vmldap
Secure Connection [x] LDAP TLS/SSL
[x] Start Automounter
[x] Create Home Directory on Login

Press [Advanced Configuration...]
Certificate Directory: /etc/ssl/certs/openldap
CA Certificate File: /etc/ssl/certs/openldap/vmcacert_pub.pem
Go to "Administration Settings" here if you want to manage LDAP users/groups from lxc1; I don't want this, as I'm managing LDAP from vm.

That's all, lxc1 has been successfully set up to authenticate users to vm.
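You can verify it from lxc1's shell before the login test:

getent passwd jbravo   # should return the LDAP user coming from vm
id jbravo              # uid/gid and the lxcops membership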

Enable the lxcops group to switch to root.
Start visudo, go to the end of the file and append the two lines below:
# The members of lxcops can switch to root
%lxcops ALL=(ALL) ALL

Save and check it:
cat /etc/sudoers | tail -2

Now comes the final test. Log in to lxc1 as jbravo, and you will see this:

Creating directory '/home/jbravo'. # Whoops, another problem. Or not. How should home directories be handled?

Switch to root by running: sudo su -
Provide jbravo's pwd, and now you're root on lxc1.



I enabled
[x] Create Home Directory on Login
and my test user (and the other members of the lxcops LDAP group) are the admins of my containers; I don't want to store any data for my container admin(s), so a freshly created empty home on each container is fine for me.

What if you need a home directory that's accessible from anywhere? That's a good idea and of course possible.
Install NFS on vm and export /home, or create an independent LVM filesystem under /export/home (for example).

Export this home via NFS ("no_root_squash" is mandatory!).
Mount this exported filesystem on vm and on your containers, for example: vm:/export/home on /mnt/vmhome.

On vm, at yast's "User and Group Administration" -> "Users──Groups──Defaults" for "New Users" edit "Path Prefix for Home Directory", change it from /home to /mnt/vmhome
Don't forget to disable [ ] Create Home Directory on Login
From now on, if you create a user (existing ones can be modified too), its home will be /mnt/vmhome/$USER, and this NFS exported home will be available from each container.
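A minimal sketch of the NFS side, assuming /export/home on vm and the 192.168.2.0/24 network (adjust the paths and the network to your own layout; the containers need the nfs-client package):

# on vm, /etc/exports:
/export/home 192.168.2.0/24(rw,no_root_squash,sync)
# then: exportfs -ra
# on vm and on each container, /etc/fstab:
vm:/export/home /mnt/vmhome nfs defaults 0 0
# then: mkdir -p /mnt/vmhome && mount /mnt/vmhome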

I don't need this so I skipped this possibility, but nobody knows the future.

I got what I wanted: now I have a SLES host with 3 LXC containers (also running SLES), with LDAP (TLS) authentication and working sudo. Now I'm going to install some middleware apps ;-)
